Unifying Adversarial Training Algorithms with Flexible Deep Data Gradient Regularization

Authors

  • Alexander Ororbia
  • C. Lee Giles
  • Daniel Kifer
Abstract

We present DataGrad, a general back-propagation-style training procedure for deep neural architectures that uses a deep Jacobian-based penalty as a regularizer. It can be viewed as a deep extension of the layerwise contractive auto-encoder penalty. More importantly, it unifies previous proposals for adversarial training of deep neural nets, including directly modifying the gradient, training on a mix of original and adversarial examples, using contractive penalties, and approximately optimizing constrained adversarial objective functions. In an experiment using a Deep Sparse Rectifier Network, we find that the deep Jacobian regularization of DataGrad (which comes in L1 and L2 flavors) outperforms traditional L1 and L2 regularization both on the original dataset and on adversarial examples.
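
To make the idea concrete, here is a minimal sketch of a DataGrad-style training loss, assuming PyTorch; the function name `datagrad_loss` and the coefficient `lam` are illustrative, not from the paper. The task loss is augmented with an L1 or L2 penalty on its gradient with respect to the input, and the gradient is kept in the autograd graph so that optimization backpropagates through the penalty.

```python
import torch
import torch.nn.functional as F

def datagrad_loss(model, x, y, lam=0.01, p=1):
    """Task loss plus a DataGrad-style penalty on the gradient of the
    loss with respect to the input (L1 flavor for p=1, L2 for p=2).
    Sketch only; names and defaults are illustrative."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # d(loss)/d(input), kept in the graph (create_graph=True) so the
    # penalty itself can be differentiated during training.
    (grad_x,) = torch.autograd.grad(loss, x, create_graph=True)
    penalty = grad_x.abs().sum() if p == 1 else grad_x.pow(2).sum()
    return loss + lam * penalty
```

Because the penalty term requires differentiating through the input gradient (double backpropagation), each training step costs roughly twice as much as a standard step.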


Similar Articles

Unifying Adversarial Training Algorithms with Data Gradient Regularization

Many previous proposals for adversarial training of deep neural nets have included directly modifying the gradient, training on a mix of original and adversarial examples, using contractive penalties, and approximately optimizing constrained adversarial objective functions. In this article, we show that these proposals are actually all instances of optimizing a general, regularized objective we...

Full text
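
The general regularized objective referred to in the truncated summary above can be sketched, in notation consistent with the DataGrad abstract, as the task loss plus a penalty on its input gradient; the symbols λ and ℛ are illustrative placeholders.

```latex
% Sketch of the unified DataGrad-style objective: task loss plus a
% penalty R on its input gradient, weighted by an illustrative
% coefficient lambda.
\mathcal{L}_{\mathrm{DG}}(\theta)
  = \sum_{i} \Big[ \mathcal{L}\big(f_\theta(x_i), y_i\big)
    + \lambda\, \mathcal{R}\big(\nabla_{x_i}
      \mathcal{L}\big(f_\theta(x_i), y_i\big)\big) \Big]
```

Choosing ℛ as the L1 norm or the squared L2 norm gives the two DataGrad flavors mentioned in the abstract.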

Wasserstein Distributional Robustness and Regularization in Statistical Learning

A central question in statistical learning is to design algorithms that not only perform well on training data, but also generalize to new and unseen data. In this paper, we tackle this question by formulating a distributionally robust stochastic optimization (DRSO) problem, which seeks a solution that minimizes the worst-case expected loss over a family of distributions that are close to the em...

Full text
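
Schematically, the DRSO problem described above minimizes the worst-case expected loss over a Wasserstein ball around the empirical distribution; the radius ρ, cost W, and loss ℓ below are generic placeholders, not notation from the paper.

```latex
% Schematic DRSO objective: worst-case expected loss over all
% distributions Q within Wasserstein distance rho of the empirical
% distribution \hat{P}_n (symbols are generic placeholders).
\min_{\theta} \;
  \sup_{Q \,:\, W(Q,\, \hat{P}_n) \le \rho}
  \mathbb{E}_{\xi \sim Q}\big[ \ell(\theta; \xi) \big]
```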

Building Robust Deep Neural Networks for Road Sign Detection

Deep neural networks are built with generalization beyond the training set in mind, using techniques such as regularization, early stopping, and dropout. But steps to make them more resilient to adversarial examples are rarely taken. As deep neural networks become more prevalent in mission-critical and real-time systems, miscreants have started to attack them by intentionally making deep neural ne...

Full text

Virtual Adversarial Training: a Regularization Method for Supervised and Semi-supervised Learning

We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the output distribution. Virtual adversarial loss is defined as the robustness of the model’s posterior distribution against local perturbation around each input data point. Our method is similar to adversarial training, but differs from adversarial training in that it determines the a...

Full text
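
A minimal PyTorch sketch of the virtual adversarial loss described above; the hyperparameters `xi`, `eps`, and `n_power` are illustrative. A perturbation direction is found by power iteration on the KL divergence of the output distribution, and the model is then penalized for its sensitivity in that direction; no labels are needed, which is what enables semi-supervised use.

```python
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=2.0, n_power=1):
    """Sketch of a virtual adversarial loss: KL divergence between the
    model's predictions at x and at x plus a small adversarial
    perturbation found by power iteration (no labels required)."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)       # reference distribution
    d = torch.randn_like(x)                  # random starting direction
    for _ in range(n_power):
        d = xi * F.normalize(d.flatten(1), dim=1).view_as(x)
        d.requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x + d), dim=1), p,
                      reduction='batchmean')
        d = torch.autograd.grad(kl, d)[0]    # direction of steepest KL
    r_vadv = eps * F.normalize(d.flatten(1), dim=1).view_as(x)
    return F.kl_div(F.log_softmax(model(x + r_vadv), dim=1), p,
                    reduction='batchmean')
```

In training, this term would be added to the supervised loss (and computed on unlabeled data as well), which is the semi-supervised use the summary refers to.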

A Closer Look at Memorization in Deep Networks

We examine the role of memorization in deep learning, drawing connections to capacity, generalization, and adversarial robustness. While deep networks are capable of memorizing noise data, our results suggest that they tend to prioritize learning simple patterns first. In our experiments, we expose qualitative differences in gradient-based optimization of deep neural networks (DNNs) on noise vs...

Full text


Journal title:
  • CoRR

Volume: abs/1601.07213  Issue: –

Pages: –

Publication date: 2016